Continuous-Observation Partially Observable Semi-Markov Decision Processes for Machine Maintenance

Authors
Abstract


Similar articles

Partially observable Markov decision processes

For reinforcement learning in environments in which an agent has access to a reliable state signal, methods based on the Markov decision process (MDP) have had many successes. In many problem domains, however, an agent suffers from limited sensing capabilities that preclude it from recovering a Markovian state signal from its perceptions. Extending the MDP framework, partially observable Markov...
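The POMDP extension described above replaces the state signal with a belief, i.e. a probability distribution over states that is updated by Bayes' rule after each action and observation. A minimal sketch of that belief update, using a hypothetical two-state machine-condition model (not taken from any of the cited papers):

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Bayes-filter belief update for a discrete POMDP:
    b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)

    b: current belief over states, shape (S,)
    T: transitions, shape (A, S, S), T[a, s, s2] = P(s2 | s, a)
    O: observations, shape (A, S, Z), O[a, s2, o] = P(o | s2, a)
    """
    predicted = b @ T[a]              # prediction step: sum_s T(s'|s,a) b(s)
    unnorm = O[a, :, o] * predicted   # correction: weight by obs. likelihood
    return unnorm / unnorm.sum()      # normalize to a distribution

# Illustrative model: state 0 = healthy, state 1 = degraded
T = np.array([[[0.9, 0.1],           # single action "operate"
               [0.0, 1.0]]])         # degradation is absorbing
O = np.array([[[0.8, 0.2],           # noisy condition sensor
               [0.3, 0.7]]])
b = np.array([0.5, 0.5])             # uniform prior
b = belief_update(b, T, O, a=0, o=1) # observe the "degraded" signal
```

After observing the noisy "degraded" signal, the belief mass shifts toward state 1; a POMDP policy then maps such beliefs, rather than raw states, to actions.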


Master Thesis Simulation Based Planning for Partially Observable Markov Decision Processes with Continuous Observation Spaces

Many problems in Artificial Intelligence and Reinforcement Learning assume that the environment of an agent is fully observable. Imagine, for instance, a robot that moves autonomously through a hallway by employing a number of actuators and that perceives its environment through a number of sensors. As long as the sensors provide reliable information about the state of the environment, the agen...


Bounded-Parameter Partially Observable Markov Decision Processes

The POMDP is considered as a powerful model for planning under uncertainty. However, it is usually impractical to employ a POMDP with exact parameters to model precisely the real-life situations, due to various reasons such as limited data for learning the model, etc. In this paper, assuming that the parameters of POMDPs are imprecise but bounded, we formulate the framework of bounded-parameter...


Quantum partially observable Markov decision processes

Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. We present quantum observable Markov decision processes (QOMDPs), the quantum analogs of partially observable Marko...


Inducing Partially Observable Markov Decision Processes

In the field of reinforcement learning (Sutton and Barto, 1998; Kaelbling et al., 1996), agents interact with an environment to learn how to act to maximize reward. Two different kinds of environment models dominate the literature—Markov Decision Processes (Puterman, 1994; Littman et al., 1995), or MDPs, and POMDPs, their Partially Observable counterpart (White, 1991; Kaelbling et al., 1998). B...



Journal

Journal title: IEEE Transactions on Reliability

Year: 2017

ISSN: 0018-9529, 1558-1721

DOI: 10.1109/tr.2016.2626477